Unbiased MLMC Stochastic Gradient-Based Optimization of Bayesian Experimental Designs

Authors

Abstract

Article Data

History: Submitted 18 May 2020; accepted 11 October 2021; published online 31 January 2022
Keywords: Bayesian experimental design, expected information gain, multilevel Monte Carlo, nested expectation, stochastic gradient descent
AMS Subject Headings: 62K05, 62L20, 65C05, 92C45, 94A17
Related Databases: Web of Science

Publication Data

ISSN (print): 1064-8275
ISSN (online): 1095-7197
Publisher: Society for Industrial and Applied Mathematics
CODEN: sjoce3

Related Articles

Gradient-based stochastic optimization methods in Bayesian experimental design

Optimal experimental design (OED) seeks experiments expected to yield the most useful data for some purpose. In practical circumstances where experiments are time-consuming or resource-intensive, OED can yield enormous savings. We pursue OED for nonlinear systems from a Bayesian perspective, with the goal of choosing experiments that are optimal for parameter inference. Our objective in this co...
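The design criterion behind this line of work, and the quantity named in the keywords of the main article above, is the expected information gain: a nested expectation over prior parameter draws and simulated data. The sketch below is a plain nested Monte Carlo estimate of it for a hypothetical one-parameter nonlinear model; the forward model, noise level, sample sizes, and all function names are illustrative assumptions, not taken from either paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(y, theta, xi, sigma=0.1):
    # Hypothetical forward model: y = exp(-xi * theta) + Gaussian noise.
    resid = y - np.exp(-xi * theta)
    return -0.5 * (resid / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

def nested_mc_eig(xi, n_outer=1000, n_inner=500, sigma=0.1):
    # Outer samples from the joint p(theta) p(y | theta, xi).
    thetas = rng.standard_normal(n_outer)
    ys = np.exp(-xi * thetas) + sigma * rng.standard_normal(n_outer)
    eig = 0.0
    for theta, y in zip(thetas, ys):
        # Inner Monte Carlo estimate of the evidence p(y | xi).
        inner_thetas = rng.standard_normal(n_inner)
        log_evidence = np.log(np.mean(np.exp(log_likelihood(y, inner_thetas, xi, sigma))))
        eig += (log_likelihood(y, theta, xi, sigma) - log_evidence) / n_outer
    return eig

print(nested_mc_eig(xi=1.0))  # EIG estimate for one candidate design
```

Judging from its title and keywords, the main article targets exactly the cost and bias of this kind of nested estimator, and of its gradient with respect to the design, using multilevel Monte Carlo inside a stochastic gradient method.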

Stochastic Gradient Descent as Approximate Bayesian Inference

Stochastic Gradient Descent with a constant learning rate (constant SGD) simulates a Markov chain with a stationary distribution. With this perspective, we derive several new results. (1) We show that constant SGD can be used as an approximate Bayesian posterior inference algorithm. Specifically, we show how to adjust the tuning parameters of constant SGD to best match the stationary distributi...
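A toy illustration of that viewpoint (my own construction, not the authors'): run SGD with a constant learning rate on a minibatch negative log-posterior and treat the post-burn-in iterates as approximate posterior samples. The conjugate Gaussian-mean model, learning rate, and batch size below are arbitrary assumptions; the conjugate model is used so the exact posterior is available for comparison.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=1.0, size=5000)     # toy data, known unit variance
prior_var = 10.0
lr, batch, n_steps, burn_in = 1e-5, 32, 20000, 2000  # constant learning rate

theta, samples = 0.0, []
for t in range(n_steps):
    idx = rng.integers(len(data), size=batch)
    # Minibatch gradient of the negative log-posterior, rescaled to the full data set.
    grad = theta / prior_var + (len(data) / batch) * np.sum(theta - data[idx])
    theta -= lr * grad
    if t >= burn_in:
        samples.append(theta)

# Exact conjugate posterior N(mu_post, var_post) for comparison.
var_post = 1.0 / (1.0 / prior_var + len(data))
mu_post = var_post * np.sum(data)
print(np.mean(samples), np.var(samples), "exact:", mu_post, var_post)
```

In this sketch the iterate mean tracks the posterior mean, while the iterate spread is governed by the learning rate and minibatch noise rather than by the posterior; tuning the constant stepsize so the two distributions match is what the abstract describes.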

Bayesian Sampling Using Stochastic Gradient Thermostats

Dynamics-based sampling methods, such as Hybrid Monte Carlo (HMC) and Langevin dynamics (LD), are commonly used to sample target distributions. Recently, such approaches have been combined with stochastic gradient techniques to increase sampling efficiency when dealing with large datasets. An outstanding problem with this approach is that the stochastic gradient introduces an unknown amount of ...
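The abstract is cut off before the method itself; for orientation only, here is a sketch of one standard form of a stochastic gradient Nose-Hoover thermostat update, written from general knowledge of this family of samplers rather than from the truncated text above. The thermostat variable xi adapts so that the kinetic energy stays at its target value, which is how unknown stochastic-gradient noise gets absorbed; the target, step size, and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def sgnht(grad_log_post, theta0, h=1e-3, A=1.0, n_steps=20000):
    theta = np.array(theta0, dtype=float)
    d = theta.size
    p = rng.standard_normal(d)     # momentum
    xi = A                         # thermostat variable
    samples = []
    for _ in range(n_steps):
        p += (-xi * p * h + grad_log_post(theta) * h
              + np.sqrt(2.0 * A * h) * rng.standard_normal(d))
        theta = theta + p * h
        xi += (p @ p / d - 1.0) * h   # push kinetic energy per dimension toward 1
        samples.append(theta.copy())
    return np.array(samples)

# Toy usage: standard normal target, so grad log p(theta) = -theta.
draws = sgnht(lambda th: -th, theta0=[0.0])
print(draws[5000:].mean(), draws[5000:].var())   # should be roughly 0 and 1
```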

Bayesian Learning via Stochastic Gradient Langevin Dynamics

In this paper we propose a new framework for learning from large scale datasets based on iterative learning from small mini-batches. By adding the right amount of noise to a standard stochastic gradient optimization algorithm we show that the iterates will converge to samples from the true posterior distribution as we anneal the stepsize. This seamless transition between optimization and Bayesi...
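A minimal sketch of the update the abstract describes: a standard stochastic gradient step on a minibatch log-posterior, plus injected Gaussian noise whose variance matches the annealed stepsize. The toy Gaussian-mean model, stepsize schedule, and batch size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(loc=-1.0, scale=1.0, size=10000)   # toy data, known unit variance
N, prior_var, batch, n_steps = len(data), 100.0, 100, 20000

theta, samples = 0.0, []
for t in range(n_steps):
    eps = 1e-5 / (1.0 + t) ** 0.33                   # slowly annealed stepsize
    idx = rng.integers(N, size=batch)
    # Minibatch estimate of the gradient of the log-posterior.
    grad = -theta / prior_var + (N / batch) * np.sum(data[idx] - theta)
    # SGLD step: half the stepsize times the gradient, plus Gaussian noise of variance eps.
    theta += 0.5 * eps * grad + np.sqrt(eps) * rng.standard_normal()
    samples.append(theta)

# Compare against the (approximately) exact posterior, roughly N(-1, 1e-4) here.
print(np.mean(samples[5000:]), np.var(samples[5000:]))
```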

Continuous-Fidelity Bayesian Optimization with Knowledge Gradient

While Bayesian optimization (BO) has achieved great success in optimizing expensive-to-evaluate black-box functions, especially tuning hyperparameters of neural networks, methods such as random search [13] and multi-fidelity BO (e.g., Klein et al. [10]) that exploit cheap approximations, e.g., training on a smaller training set or with fewer iterations, can outperform standard BO approaches that...

Journal

Journal title: SIAM Journal on Scientific Computing

Year: 2022

ISSN: 1064-8275 (print), 1095-7197 (online)

DOI: https://doi.org/10.1137/20m1338848